SB 1047


Why AI Regulation Has Become a 'States' Rights' Issue

TIME - Tech

Regardless of the outcome, the battle reflects both the AI industry's influence in Washington and the heightened anxieties that influence is causing among many different coalitions. Here's how the battle lines are being drawn, and why key Republicans are defecting from the party line. Congress has been notoriously slow to pass any sort of tech regulation in the past two decades. As a result, states have filled the void, passing bills that regulate biometric data and child online safety. The same has held true for AI: as AI usage has surged, states have proposed hundreds of AI bills, with dozens enacted into law.


California AI Policy Report Warns of 'Irreversible Harms'

TIME - Tech

While AI could offer transformative benefits, without proper safeguards it could facilitate nuclear and biological threats and cause "potentially irreversible harms," a new report commissioned by California Governor Gavin Newsom has warned. "The opportunity to establish effective AI governance frameworks may not remain open indefinitely," says the report, which was published on June 17. Citing new evidence that AI can help users source nuclear-grade uranium and is on the cusp of letting novices create biological threats, it notes that the cost for inaction at this current moment could be "extremely high." The 53-page document stems from a working group established by Governor Newsom, in a state that has emerged as a central arena for AI legislation. With no comprehensive federal regulation on the horizon, state-level efforts to govern the technology have taken on outsized significance, particularly in California, which is home to many of the world's top AI companies.


California's AI Act Vetoed

Communications of the ACM

Under SB 1047, developers of very large frontier models (defined as models trained with computing power greater than 10^26 integer or floating-point operations, or costing more than $100 million at the start of training) and those who fine-tune large frontier models (also measured by compute requirements and/or training costs) would be responsible for ensuring that these models do not cause "critical harms," a category covering mass casualties, severe damage to critical infrastructure, and other comparably grave harms to public safety and security. Under this bill, developers of large frontier models would be required to take numerous steps at three phases of development: some before training, some before deploying such a model or making it available, and some during uses of covered models. Among the required steps would be installing a "kill switch" at the pre-training stage, taking reasonable measures to prevent models from posing unreasonable risks, and publishing redacted copies of the developers' safety and security protocols. Developers would also be required to hire independent third-party auditors to verify compliance with the law's requirements.


Comparative Global AI Regulation: Policy Perspectives from the EU, China, and the US

Chun, Jon, de Witt, Christian Schroeder, Elkins, Katherine

arXiv.org Artificial Intelligence

As a powerful and rapidly advancing dual-use technology, AI offers both immense benefits and worrisome risks. In response, governing bodies around the world are developing a range of regulatory AI laws and policies. This paper compares three distinct approaches taken by the EU, China and the US. Within the US, we explore AI regulation at both the federal and state level, with a focus on California's pending Senate Bill 1047. Each regulatory system reflects distinct cultural, political and economic perspectives. Each also highlights differing regional perspectives on regulatory risk-benefit tradeoffs, with divergent judgments on the balance between safety versus innovation and cooperation versus competition. Finally, differences between regulatory frameworks reflect contrasting stances regarding trust in centralized authority versus trust in a more decentralized free market of self-interested stakeholders. Taken together, these varied approaches to AI innovation and regulation influence each other, the broader international community, and the future of AI regulation.


Gavin Newsom Blocks Contentious AI Safety Bill in California

TIME - Tech

California Governor Gavin Newsom has vetoed what would have become one of the most comprehensive policies governing the safety of artificial intelligence in the U.S. The bill would've been among the first to hold AI developers accountable for any severe harm caused by their technologies. It drew fierce criticism from some prominent Democrats and major tech firms, including ChatGPT creator OpenAI and venture capital firm Andreessen Horowitz, who warned it could stall innovation in the state. Newsom described the legislation as "well-intentioned" but said in a statement that it would've applied "stringent standards to even the most basic functions." Regulation should be based on "empirical evidence and science," he said, pointing to his own executive order on AI and other bills he's signed that regulate the technology around known risks such as deepfakes. The debate around California's SB 1047 bill highlights the challenge that lawmakers around the world are facing in controlling the risks of AI while also supporting the emerging technology.


Careful not to stifle innovation, Newsom hesitates on major tech bills

Los Angeles Times

Backstage at one of the largest artificial intelligence conferences in the world, Gov. Gavin Newsom listened to two leaders in the field debate opposite views of a high-profile bill on his desk to protect Californians from the technology. "Honestly, I take advantage of opportunities like this," Newsom said, recounting the exchange later during an interview at the Salesforce conference in San Francisco in mid-September. "I just watched them, and I was like, 'Here we go. Should I sign it, or should I not?' Then 'absolutely,' 'absolutely not' and back and forth." The scene offered a peek into Newsom's deliberations on regulating the tech industry, including an explosion of AI companies, and the forces seeking to influence him during bill-signing season at the state Capitol.


California Gov. Newsom vetoes bill SB 1047 that aims to prevent AI disasters

Engadget

California Gov. Gavin Newsom has vetoed bill SB 1047, which aimed to prevent bad actors from using AI to cause "critical harm" to humans. The California state assembly passed the legislation by a margin of 41-9 on August 28, but several organizations including the Chamber of Commerce had urged Newsom to veto the bill. In his veto message on Sept. 29, Newsom said the bill is "well-intentioned" but "does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. Instead, the bill applies stringent standards to even the most basic functions - so long as a large system deploys it." SB 1047 would have made developers of AI models responsible for adopting safety protocols designed to prevent catastrophic uses of their technology.


Gov. Gavin Newsom vetoes AI safety bill opposed by Silicon Valley

Los Angeles Times

Gov. Gavin Newsom on Sunday vetoed SB 1047, an artificial intelligence safety bill that would have established requirements for developers of advanced AI models to create protocols aimed at preventing catastrophes. The bill, introduced by Sen. Scott Wiener (D-San Francisco), would have required developers to submit their safety plans to the state attorney general, who could hold them liable if AI models they directly control were to cause harm or imminent threats to public safety. Additionally, the legislation would have required tech firms to be able to turn off the AI models they directly control if things went awry. In his veto message, Newsom said the legislation could give the public a "false sense of security about controlling this fast-moving technology" because it targeted only large-scale and expensive AI models and not smaller, specialized systems. "While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data," Newsom's veto message stated.


Opinion: California's AI safety bill is under fire. Making it law is the best way to improve it

Los Angeles Times

On Aug. 29, the California Legislature passed Senate Bill 1047 -- the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act -- and sent it to Gov. Gavin Newsom for signature. Newsom's choice, due by Sept. 30, is binary: Kill it or make it law. Acknowledging the possible harm that could come from advanced AI, SB 1047 requires technology developers to integrate safeguards as they develop and deploy what the bill calls "covered models." The California attorney general can enforce these requirements by pursuing civil actions against parties that aren't taking "reasonable care" that 1) their models won't cause catastrophic harms, or 2) their models can be shut down in case of emergency. Legislation from State Sen. Scott Wiener would introduce standards for product safety testing and liability. Many prominent AI companies oppose the bill either individually or through trade associations.


California's Draft AI Law Would Protect More than Just People

TIME - Tech

Few places in the world have more to gain from a flourishing AI industry than California. Few also have more to lose if the public's trust in the industry were suddenly shattered. In May, the California Senate passed SB 1047, a piece of AI safety legislation, in a vote of 32 to one; the bill aims to ensure the safe development of large-scale AI systems through clear, predictable, common-sense safety standards. The bill is now slated for a state assembly vote this week and, if signed into law by Governor Gavin Newsom, would represent a significant step in protecting California citizens and the state's burgeoning AI industry from malicious use. Late Monday, Elon Musk shocked many by announcing his support for the bill in a post on X. "This is a tough call and will make some people upset, but, all things considered, I think California should probably pass the SB 1047 AI safety bill," he wrote.